Confluent
Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Our cloud-native offering is the foundational platform for data in motion — designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, our customers can meet the new business imperative of delivering rich, digital customer experiences and real-time business operations. Our mission is to help every organization harness data in motion so they can compete and thrive in the modern world.
Featured Products
- Turn Data Mess into Data Products
Meet the Confluent Data Streaming Platform. Connect and transform all your data into real-time, contextual, trustworthy, and reusable universal data products so you can bring applications to market faster.
- Break data silos with real-time connectivity
Quickly connect data systems and apps with Apache Kafka®, leveraging a rich ecosystem of 120+ connectors built by the Kafka experts.
- Build Streaming Data Pipelines Visually
Fast-track building pipelines powered by Apache Kafka® using a graphical canvas that’s extensible with SQL.
Featured Content
- The Builder’s Guide to Streaming Data Mesh
Learn how to successfully implement a data mesh and build data products using Confluent’s data streaming platform, leveraging connectors, stream processing, and Stream Governance.
- Shift Left: Unifying Operations and Analytics With Data Products
Extract-transform-load (ETL) and extract-load-transform (ELT) data pipelines have long been the primary means for getting data into the analytics plane. But data consumers in the analytics domain have had little to no control or influence over the source data model, which is commonly defined by application developers in the operational domain.
Multimedia Center
- Retrieval Augmented Generation (RAG) with Data Streaming
How do you prevent hallucinations from large language models (LLMs) in GenAI applications?
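The core idea behind RAG is to ground the model's prompt in retrieved, up-to-date documents rather than relying on what the model memorized in training. The sketch below illustrates that shape with a toy bag-of-words retriever; a production system would instead use an embedding model and a vector store kept fresh by a streaming pipeline. All document text and function names here are illustrative, not part of any Confluent API.

```python
# Toy sketch of Retrieval Augmented Generation (RAG): retrieve relevant
# context and splice it into the prompt so the LLM answers from fresh facts
# instead of hallucinating. Bag-of-words cosine similarity stands in for a
# real embedding model plus vector store.
import math
from collections import Counter

DOCS = [
    "Freight clusters write directly to object storage to cut costs.",
    "Kafka connectors move data between external systems and topics.",
    "Flink SQL lets you transform streams with declarative queries.",
]

def vectorize(text: str) -> Counter:
    """Crude term-frequency vector; a real system would embed the text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the LLM by restricting it to the retrieved context."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do Freight clusters cut costs?"))
```

In a data-streaming setup, the interesting part is keeping `DOCS` (the vector store) continuously updated from source systems, which is where Kafka-based pipelines come in.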
- How to Build a Stream Processing System | Data Streaming Systems
Learn how to transition a stream processing system from proof of concept (POC) to production: secure the application and platform, handle spikes, outages, and errors, and automate deployment so you can focus on building value-adding applications.
Articles / Case Studies / Videos
- Data Products, Data Contracts, and Change Data Capture
Reading time: 23 mins
Change Data Capture (CDC) is a popular method for connecting databases to data streams, but it can expose internal data models and introduce risk. The solution lies in developing first-class data products with data contracts that decouple internal and external models, facilitated by tools like Apache Kafka and Flink, allowing for reliable, high-quality data sharing while…
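The decoupling idea above can be sketched in a few lines: rather than letting CDC publish the raw table schema, the owning service maps its internal record onto a contract-backed public schema before the event leaves its boundary. All type and field names below are hypothetical, chosen only to illustrate the pattern.

```python
# Sketch of decoupling an internal model from an external data product.
# The internal row mirrors the database table (and may contain PII and
# implementation details); the external event is the data contract that
# downstream consumers depend on. Only the mapping function crosses over.
from dataclasses import dataclass, asdict

@dataclass
class OrderRow:              # internal model: mirrors the source table
    id: int
    cust_email: str          # PII that must not leak downstream
    total_cents: int
    internal_flags: int      # implementation detail, free to change

@dataclass
class OrderEvent:            # external model: the agreed data contract
    order_id: int
    total: float             # dollars, the public unit of the contract

def to_data_product(row: OrderRow) -> OrderEvent:
    # Only contract fields cross the boundary; PII and flags stay internal.
    return OrderEvent(order_id=row.id, total=row.total_cents / 100)

row = OrderRow(id=7, cust_email="a@example.com", total_cents=1250, internal_flags=3)
print(asdict(to_data_product(row)))  # → {'order_id': 7, 'total': 12.5}
```

Because consumers only ever see `OrderEvent`, the team can rename columns or restructure `OrderRow` without breaking anyone downstream; in practice the mapping would run in a stream processor and the contract would be enforced by a schema registry.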
- Introducing Confluent Cloud Freight Clusters
Reading time: 5 mins
Confluent has launched Freight clusters, a cost-effective cloud-native solution for high-throughput, relaxed latency workloads, offering savings of up to 90% compared to self-managed Apache Kafka by utilizing a direct write mode to object storage, thereby eliminating costly inter-AZ data replication.
- Contributing to Apache Kafka®: How to Write a KIP
Reading time: 6 mins
I’m brand new to writing KIPs (Kafka Improvement Proposals). I’ve written two so far, and my hands sweat every time I hit send on an email with ‘[DISCUSS] KIP’ in the title. But I’ve also learned a lot from the process: about Apache Kafka® internals, the process of writing KIPs, the Kafka community, and the most…